From Biased to Unbiased Dynamics: An Infinitesimal Generator Approach

Neural Information Processing Systems

To overcome this bottleneck, data are collected via biased simulations that explore the state space more rapidly. We propose a framework for learning from biased simulations rooted in the infinitesimal generator of the process and the associated resolvent operator. We contrast our approach to more common ones based on the transfer operator, showing that it can provably learn the spectral properties of the unbiased system from biased data.
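The generator-based machinery of the paper is more involved, but the basic premise — that statistics of the unbiased system can be recovered from biased samples — can be illustrated with standard importance reweighting. The setup below (a Gaussian toy system and harmonic bias) is purely illustrative and not the authors' method:

```python
import numpy as np

def unbiased_average(observable, samples, bias, beta=1.0):
    """Estimate an unbiased equilibrium average from samples drawn under a
    biased potential U + V_bias, using weights w = exp(+beta * V_bias)."""
    w = np.exp(beta * bias(samples))
    return np.sum(w * observable(samples)) / np.sum(w)

# Toy check: the biased simulation samples a broadened Gaussian
# (density ∝ exp(-x²/4)); the unbiased target is ∝ exp(-x²/2).
rng = np.random.default_rng(0)
x = rng.normal(0.0, np.sqrt(2.0), size=200_000)  # biased samples, variance 2
bias = lambda s: -0.25 * s**2                    # V_bias = -x²/4 broadens the well
second_moment = unbiased_average(lambda s: s**2, x, bias)
print(second_moment)  # close to 1.0, the unbiased ⟨x²⟩
```

Reweighting alone degrades badly in high dimension, which is precisely the motivation for operator-based approaches like the one in the paper.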


Enhanced Sampling for Efficient Learning of Coarse-Grained Machine Learning Potentials

Chen, Weilong, Görlich, Franz, Fuchs, Paul, Zavadlav, Julija

arXiv.org Artificial Intelligence

Coarse-graining (CG) enables molecular dynamics (MD) simulations of larger systems and longer timescales that are otherwise infeasible with atomistic models. Machine learning potentials (MLPs), with their capacity to capture many-body interactions, can provide accurate approximations of the potential of mean force (PMF) in CG models. Current CG MLPs are typically trained in a bottom-up manner via force matching, which in practice relies on configurations sampled from the unbiased equilibrium Boltzmann distribution to ensure thermodynamic consistency. This convention poses two key limitations: first, sufficiently long atomistic trajectories are needed to reach convergence; and second, even once equilibrated, transition regions remain poorly sampled. To address these issues, we employ enhanced sampling to bias along CG degrees of freedom for data generation, and then recompute the forces with respect to the unbiased potential. This strategy simultaneously shortens the simulation time required to produce equilibrated data and enriches sampling in transition regions, while preserving the correct PMF. We demonstrate its effectiveness on the Müller-Brown potential and capped alanine, achieving notable improvements. Our findings support the use of enhanced sampling for force matching as a promising direction to improve the accuracy and reliability of CG MLPs.
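The force-recomputation step described above follows from the identity F_biased = -∇U - ∇V_bias, so subtracting the bias force recovers the unbiased force. A minimal sketch, with a hypothetical harmonic bias along the first CG coordinate standing in for an enhanced-sampling bias:

```python
import numpy as np

K, X0 = 5.0, 0.0  # illustrative bias parameters, not from the paper

def bias_force(x):
    """Force of a harmonic bias V_bias = 0.5*K*(x[0]-X0)^2 along
    the first (CG) coordinate: F_bias = -dV_bias/dx."""
    f = np.zeros_like(x)
    f[..., 0] = -K * (x[..., 0] - X0)
    return f

def unbias_forces(biased_forces, x):
    """The simulation ran on U + V_bias, so the recorded force is
    -∇U - ∇V_bias; subtracting F_bias recovers -∇U for force matching."""
    return biased_forces - bias_force(x)

x = np.array([[1.0, 0.5]])
f_biased = np.array([[-2.0, 0.3]])  # placeholder total force from the biased run
f_unbiased = unbias_forces(f_biased, x)
print(f_unbiased)  # first component: -2.0 - (-5.0) = 3.0
```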


Learning collective variables that preserve transition rates

Sule, Shashank, Mehta, Arnav, Cameron, Maria K.

arXiv.org Machine Learning

Collective variables (CVs) play a crucial role in capturing rare events in high-dimensional systems, motivating the continual search for principled approaches to their design. In this work, we revisit the framework of quantitative coarse graining and identify the orthogonality condition from Legoll and Lelievre (2010) as a key criterion for constructing CVs that accurately preserve the statistical properties of the original process. We establish that satisfaction of the orthogonality condition enables error estimates for both relative entropy and pathwise distance to scale proportionally with the degree of scale separation. Building on this foundation, we introduce a general numerical method for designing neural network-based CVs that integrates tools from manifold learning with group-invariant featurization. To demonstrate the efficacy of our approach, we construct CVs for butane and achieve a CV that reproduces the anti-gauche transition rate with less than ten percent relative error. Additionally, we provide empirical evidence challenging the necessity of uniform positive definiteness in diffusion tensors for transition rate reproduction and highlight the critical role of light atoms in CV design for molecular dynamics.


Neumann eigenmaps for landmark embedding

Sule, Shashank, Czaja, Wojciech

arXiv.org Machine Learning

We present Neumann eigenmaps (NeuMaps), a novel approach for enhancing the standard diffusion map embedding using landmarks, i.e., distinguished samples within the dataset. By interpreting these landmarks as a subgraph of the larger data graph, NeuMaps are obtained via the eigendecomposition of a renormalized Neumann Laplacian. We show that NeuMaps offer two key advantages: (1) they provide a computationally efficient embedding that accurately recovers the diffusion distance associated with the reflecting random walk on the subgraph, and (2) they naturally incorporate the Nyström extension within the diffusion map framework through the discrete Neumann boundary condition. Through examples in digit classification and molecular dynamics, we demonstrate that NeuMaps not only improve upon existing landmark-based embedding methods but also enhance the stability of diffusion map embeddings to the removal of highly significant points.
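For context, the general diffusion-map recipe that NeuMaps builds on — Gaussian kernel on the landmark points, row normalization into a random-walk matrix, then the top nontrivial eigenvectors — can be sketched as follows. This is the standard construction only, not the paper's renormalized Neumann Laplacian:

```python
import numpy as np

def diffusion_embedding(landmarks, eps=0.5, n_comp=2):
    """Standard diffusion-map-style embedding of landmark points
    (illustrative baseline; NeuMaps uses a renormalized Neumann
    Laplacian on the landmark subgraph instead)."""
    d2 = np.sum((landmarks[:, None] - landmarks[None, :]) ** 2, axis=-1)
    K = np.exp(-d2 / eps)                      # Gaussian affinity
    P = K / K.sum(axis=1, keepdims=True)       # random walk on the subgraph
    vals, vecs = np.linalg.eig(P)              # real spectrum: P ~ symmetric matrix
    order = np.argsort(-vals.real)
    idx = order[1:1 + n_comp]                  # skip the trivial constant eigenvector
    return vecs[:, idx].real * vals[idx].real  # eigenvalue-scaled coordinates

rng = np.random.default_rng(1)
pts = rng.normal(size=(50, 3))
emb = diffusion_embedding(pts)
print(emb.shape)  # (50, 2)
```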


Estimating Committor Functions via Deep Adaptive Sampling on Rare Transition Paths

Wang, Yueyang, Tang, Kejun, Wang, Xili, Wan, Xiaoliang, Ren, Weiqing, Yang, Chao

arXiv.org Machine Learning

The committor functions are central to investigating rare but important events in molecular simulations. It is known that computing the committor function suffers from the curse of dimensionality. Recently, using neural networks to estimate the committor function has gained attention due to its potential for high-dimensional problems. Training neural networks to approximate the committor function needs to sample transition data from straightforward simulations of rare events, which is very inefficient. The scarcity of transition data makes it challenging to approximate the committor function. To address this problem, we propose an efficient framework to generate data points in the transition state region that helps train neural networks to approximate the committor function. We design a Deep Adaptive Sampling method for TRansition paths (DASTR), where deep generative models are employed to generate samples to capture the information of transitions effectively. In particular, we treat a non-negative function in the integrand of the loss functional as an unnormalized probability density function and approximate it with the deep generative model. The new samples from the deep generative model are located in the transition state region and fewer samples are located in the other region. This distribution provides effective samples for approximating the committor function and significantly improves the accuracy. We demonstrate the effectiveness of the proposed method through both simulations and realistic examples.
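The core sampling idea — treating the non-negative loss integrand |∇q|² exp(-βV) as an unnormalized density concentrated in the transition region — can be illustrated in one dimension. Here grid-based resampling stands in for the paper's deep generative model, and the double-well potential and committor guess are illustrative only:

```python
import numpy as np

beta = 2.0
V = lambda x: (x**2 - 1.0) ** 2                  # double well, minima at x = ±1
q_guess = lambda x: 0.5 * (1 + np.tanh(3 * x))   # crude committor surrogate
dq = lambda x: 1.5 / np.cosh(3 * x) ** 2         # its derivative

x = np.linspace(-2, 2, 2001)
w = dq(x) ** 2 * np.exp(-beta * V(x))            # unnormalized density |q'|² e^{-βV}
p = w / w.sum()
rng = np.random.default_rng(0)
samples = rng.choice(x, size=5000, p=p)
# Samples concentrate near the barrier top x ≈ 0, i.e., the transition
# region, rather than in the metastable wells where e^{-βV} peaks.
print(samples.mean(), samples.std())
```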


Machine Learning of Slow Collective Variables and Enhanced Sampling via Spatial Techniques

Gökdemir, Tuğçe, Rydzewski, Jakub

arXiv.org Artificial Intelligence

Understanding the long-time dynamics of complex physical processes depends on our ability to recognize patterns. To simplify the description of these processes, we often introduce a set of reaction coordinates, customarily referred to as collective variables (CVs). The quality of these CVs heavily impacts our comprehension of the dynamics, often influencing the estimates of thermodynamics and kinetics from atomistic simulations. Consequently, identifying CVs poses a fundamental challenge in chemical physics. Recently, significant progress was made by leveraging the predictive ability of unsupervised machine learning techniques to determine CVs. Many of these techniques require temporal information to learn slow CVs that correspond to the long timescale behavior of the studied process. Here, however, we specifically focus on techniques that can identify CVs corresponding to the slowest transitions between states without needing temporal trajectories as input, instead using the spatial characteristics of the data. We discuss the latest developments in this category of techniques and briefly discuss potential directions for thermodynamics-informed spatial learning of slow CVs.


A Note on Spectral Map

Gökdemir, Tuğçe, Rydzewski, Jakub

arXiv.org Artificial Intelligence

In molecular dynamics (MD) simulations, transitions between states are often rare events due to energy barriers that exceed the thermal temperature. Because of their infrequent occurrence and the huge number of degrees of freedom in molecular systems, understanding the physical properties that drive rare events is immensely difficult. A common approach to this problem is to propose a collective variable (CV) that describes this process by a simplified representation. However, choosing CVs is not easy, as it often relies on physical intuition. Machine learning (ML) techniques provide a promising approach for effectively extracting optimal CVs from MD data. Here, we provide a note on a recent unsupervised ML method called spectral map, which constructs CVs by maximizing the timescale separation between slow and fast variables in the system.
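The objective at the heart of spectral map — a spectral gap of a transition matrix built from kernel similarities of CV values — can be sketched numerically. The estimator below is a simplified stand-in (Gaussian kernel, fixed scale), not the paper's exact construction; in spectral map this quantity is maximized over the CV parameters:

```python
import numpy as np

def spectral_gap(z, eps=0.1):
    """Gap after the slow eigenvalue of a row-stochastic matrix built
    from a Gaussian kernel on scalar CV values z (two metastable states
    assumed, so the gap is taken between the 2nd and 3rd eigenvalues)."""
    d2 = (z[:, None] - z[None, :]) ** 2
    M = np.exp(-d2 / eps)
    M /= M.sum(axis=1, keepdims=True)
    lam = np.sort(np.linalg.eigvals(M).real)[::-1]
    return lam[1] - lam[2]

rng = np.random.default_rng(0)
two_states = np.concatenate([rng.normal(-1, 0.05, 50), rng.normal(1, 0.05, 50)])
one_state = rng.normal(0, 0.05, 100)
# A CV that separates metastable states yields a larger spectral gap.
print(spectral_gap(two_states), spectral_gap(one_state))
```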


Collective variables of neural networks: empirical time evolution and scaling laws

Tovey, Samuel, Krippendorf, Sven, Spannowsky, Michael, Nikolaou, Konstantin, Holm, Christian

arXiv.org Artificial Intelligence

This work presents a novel means for understanding learning dynamics and scaling relations in neural networks. We show that certain measures on the spectrum of the empirical neural tangent kernel, specifically entropy and trace, yield insight into the representations learned by a neural network and how these can be improved through architecture scaling. These results are demonstrated first on test cases before being shown on more complex networks, including transformers, auto-encoders, graph neural networks, and reinforcement learning studies. In testing on a wide range of architectures, we highlight the universal nature of training dynamics and further discuss how it can be used to understand the mechanisms behind learning in neural networks. We identify two such dominant mechanisms present throughout machine learning training. The first, information compression, is seen through a reduction in the entropy of the NTK spectrum during training, and occurs predominantly in small neural networks. The second, coined structure formation, is seen through an increasing entropy and thus, the creation of structure in the neural network representations beyond the prior established by the network at initialization. Due to the ubiquity of the latter in deep neural network architectures and its flexibility in the creation of feature-rich representations, we argue that this form of evolution of the network's entropy be considered the onset of a deep learning regime.
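The two measures the paper tracks — entropy and trace of the empirical NTK spectrum — are straightforward to compute once the per-sample parameter Jacobian is available, since the empirical NTK is K = J Jᵀ. A toy sketch with a random Jacobian standing in for a real network's:

```python
import numpy as np

def ntk_entropy_trace(jacobian):
    """Shannon entropy of the normalized NTK eigenvalue distribution and
    the NTK trace, for an empirical NTK K = J J^T (J: samples x params)."""
    K = jacobian @ jacobian.T
    lam = np.linalg.eigvalsh(K)
    lam = np.clip(lam, 1e-12, None)   # guard against numerically negative values
    p = lam / lam.sum()
    entropy = -np.sum(p * np.log(p))
    return entropy, np.trace(K)

rng = np.random.default_rng(0)
J = rng.normal(size=(32, 100))        # 32 samples, 100 parameters (toy stand-in)
s, tr = ntk_entropy_trace(J)
print(s, tr)  # entropy is bounded above by log(32)
```

Under this reading, falling entropy during training signals information compression, while rising entropy signals the structure-formation regime the paper describes.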


Spectral Map for Slow Collective Variables, Markovian Dynamics, and Transition State Ensembles

Rydzewski, Jakub

arXiv.org Artificial Intelligence

Understanding the behavior of complex molecular systems is a fundamental problem in physical chemistry. To describe the long-time dynamics of such systems, which is responsible for their most informative characteristics, we can identify a few slow collective variables (CVs) while treating the remaining fast variables as thermal noise. This enables us to simplify the dynamics and treat it as diffusion in a free-energy landscape spanned by slow CVs, effectively rendering the dynamics Markovian. Our recent statistical learning technique, spectral map [Rydzewski, J. Phys. Chem. Lett. 2023, 14, 22, 5216-5220], explores this strategy to learn slow CVs by maximizing a spectral gap of a transition matrix. In this work, we introduce several advancements into our framework, using a high-dimensional reversible folding process of a protein as an example. We implement an algorithm for coarse-graining Markov transition matrices to partition the reduced space of slow CVs kinetically and use it to define a transition state ensemble. We show that slow CVs learned by spectral map closely approach the Markovian limit for an overdamped diffusion. We demonstrate that coordinate-dependent diffusion coefficients only slightly affect the constructed free-energy landscapes. Finally, we present how spectral map can be used to quantify the importance of features and compare slow CVs with structural descriptors commonly used in protein folding. Overall, we demonstrate that a single slow CV learned by spectral map can be used as a physical reaction coordinate to capture essential characteristics of protein folding.


Learning Collective Variables for Protein Folding with Labeled Data Augmentation through Geodesic Interpolation

Yang, Soojung, Nam, Juno, Dietschreit, Johannes C. B., Gómez-Bombarelli, Rafael

arXiv.org Artificial Intelligence

In molecular dynamics (MD) simulations, rare events, such as protein folding, are typically studied by means of enhanced sampling techniques, most of which rely on the definition of a collective variable (CV) along which the acceleration occurs. Obtaining an expressive CV is crucial, but often hindered by the lack of information about the particular event, e.g., the transition from unfolded to folded conformation. We propose a simulation-free data augmentation strategy using physics-inspired metrics to generate geodesic interpolations resembling protein folding transitions, thereby improving sampling efficiency without true transition state samples. Leveraging interpolation progress parameters, we introduce a regression-based learning scheme for CV models, which outperforms classifier-based methods when transition state data is limited and noisy.
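The regression scheme amounts to fitting a CV model to the interpolation progress label t ∈ [0, 1] attached to each augmented configuration. A minimal sketch with linear interpolants between two hypothetical endpoint structures and a linear model standing in for the neural CV (the paper uses geodesic, not linear, interpolation):

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(0, 1, 500)                        # progress labels (0 = unfolded-like)
end_a, end_b = np.array([0.0, 0.0]), np.array([3.0, 1.0])  # toy endpoint structures
X = (1 - t)[:, None] * end_a + t[:, None] * end_b # interpolated configurations
X += rng.normal(0, 0.05, X.shape)                 # noise off the interpolation path

# Regress the CV onto the progress parameter (least squares with bias term).
A = np.c_[X, np.ones(len(X))]
w, *_ = np.linalg.lstsq(A, t, rcond=None)
cv = lambda x: np.c_[x, np.ones(len(x))] @ w      # learned CV ≈ folding progress
pred = cv(X)
print(np.corrcoef(pred, t)[0, 1])                 # close to 1: CV tracks progress
```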